%%capture
!pip install git+https://github.com/huggingface/datasets pillow rich faiss-gpu sentence_transformers 

To start, let's take a look at the image feature. We can use the wonderful rich library to poke around Python objects (functions, classes, etc.).

from rich import inspect
from datasets.features import features
inspect(features.Image, help=True)
╭───────────────────────── <class 'datasets.features.image.Image'> ─────────────────────────╮
│ class Image(decode: bool = True, id: Union[str, NoneType] = None) -> None:                │
│                                                                                           │
│ Image feature to read image data from an image file.                                      │
│                                                                                           │
│ Input: The Image feature accepts as input:                                                │
│ - A :obj:`str`: Absolute path to the image file (i.e. random access is allowed).          │
│ - A :obj:`dict` with the keys:                                                            │
│                                                                                           │
│     - path: String with relative path of the image file to the archive file.              │
│     - bytes: Bytes of the image file.                                                     │
│                                                                                           │
│   This is useful for archived files with sequential access.                               │
│                                                                                           │
│ - An :obj:`np.ndarray`: NumPy array representing an image.                                │
│ - A :obj:`PIL.Image.Image`: PIL image object.                                             │
│                                                                                           │
│ Args:                                                                                     │
│     decode (:obj:`bool`, default ``True``): Whether to decode the image data. If `False`, │
│         returns the underlying dictionary in the format {"path": image_path, "bytes":     │
│ image_bytes}.                                                                             │
│                                                                                           │
│  decode = True                                                                            │
│   dtype = 'PIL.Image.Image'                                                               │
│      id = None                                                                            │
│ pa_type = StructType(struct<bytes: binary, path: string>)                                 │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

We can see there are a few different ways in which we can pass in our images. We'll come back to this in a little while.
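
To make this concrete, here is a minimal sketch (assuming a recent version of datasets; the paths and column names are made up) showing each of those input types being used to build a tiny dataset:

# A minimal sketch of the input types the Image feature accepts.
# Paths and column names here are hypothetical; images are only decoded when an example is accessed.
import numpy as np
from PIL import Image as PILImage
from datasets import Dataset, Features, Image

img_features = Features({"img": Image()})

# 1. A string path to an image file
ds_from_path = Dataset.from_dict({"img": ["some_dir/example.jpg"]}, features=img_features)

# 2. A PIL image object
ds_from_pil = Dataset.from_dict({"img": [PILImage.new("RGB", (32, 32))]}, features=img_features)

# 3. A NumPy array representing an image
ds_from_array = Dataset.from_dict({"img": [np.zeros((32, 32, 3), dtype=np.uint8)]}, features=img_features)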

A really nice feature of the datasets library (beyond the functionality for processing data, memory mapping etc.) is that you get some nice things 'for free'. One of these is the ability to add a faiss index to a dataset. faiss is a "library for efficient similarity search and clustering of dense vectors".

The datasets docs show an example of using a faiss index for text retrieval. In this post we'll see if we can do the same for images.

The dataset: "Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900. JPG"

This is a dataset of images which have been pulled from a collection of digitised books from the British Library. These images come from books across a wide time period and from a broad range of domains. The images were extracted using information contained in the OCR output for each book. As a result, it's known which book an image came from, but not necessarily anything else about it, i.e. what it depicts.

Some attempts to help overcome this have included uploading the images to Flickr. This allows people to tag the images or put them into various different categories.

There have also been projects to tag the dataset using machine learning. This work already makes it possible to search by tags, but we might want a 'richer' ability to search. For this particular experiment we'll work with a subset of the collection which contains "embellishments". This subset is a bit smaller, so it's better suited to experimenting with. We can get the full data from the British Library's data repository: https://doi.org/10.21250/db17. Since the full dataset is still fairly large, we'll work with a smaller sample.

#!wget -o dig19cbooks-embellishments.zip "https://bl.iro.bl.uk/downloads/ba1d1d12-b1bd-4a43-9696-7b29b56cdd20?locale=en"
!wget -O dig19cbooks-embellishments.zip "https://zenodo.org/record/6224034/files/embellishments_sample.zip?download=1"
--2022-02-22 14:23:09--  https://zenodo.org/record/6224034/files/embellishments_sample.zip?download=1
Resolving zenodo.org (zenodo.org)... 137.138.76.77
Connecting to zenodo.org (zenodo.org)|137.138.76.77|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1102027500 (1.0G) [application/octet-stream]
Saving to: ‘dig19cbooks-embellishments.zip’

dig19cbooks-embelli 100%[===================>]   1.03G  19.1MB/s    in 1m 57s  

2022-02-22 14:25:08 (8.97 MB/s) - ‘dig19cbooks-embellishments.zip’ saved [1102027500/1102027500]

!unzip -q dig19cbooks-embellishments.zip

Now that we have the data downloaded, we'll try and load it into datasets. There are various ways of doing this. To start with, we can grab all of the image files we need.

from pathlib import Path
files = list(Path('embellishments_sample/').rglob("*.jpg"))

Since the file path encodes the year of publication for the book the image came from, let's create a function to grab that.

def get_parts(f: Path):
    _, year, fname = f.parts
    return year, fname
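
As a quick sanity check, here's what get_parts returns for a made-up path that follows the same layout as the real files:

# Quick check of get_parts with a made-up path in the 'embellishments_sample/<year>/<filename>.jpg' layout
example_path = Path("embellishments_sample/1855/000000001_01_000005_1_example_1855.jpg")
year, fname = get_parts(example_path)
print(year)   # -> '1855'
print(fname)  # -> '000000001_01_000005_1_example_1855.jpg'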

📸 Loading the images

The usual way we would load an image in python would be something like:

from PIL import Image
def load_image(path):
    return Image.open(path)

However, we can leverage datasets to do this for us.

Preparing images for datasets

We'll now prepare our images for loading. We'll loop through all of the images and load the information for each one into a dictionary. Since we want the keys to represent the columns in our dataset, and the values of those keys to be lists of the relevant data, we can use a defaultdict for this.

from collections import defaultdict
data = defaultdict(list)
for file in files:
    year, fname = get_parts(file)
    data['fname'].append(fname)
    data['year'].append(year)
    data['path'].append(str(file)) 

We can now use the from_dict method to create a new dataset.

from datasets import Dataset
dataset = Dataset.from_dict(data)

We can look at one example to see what this looks like.

dataset[0]
{'fname': '003557206_04_000013_1_The Works of the Reverend Dr  Jonathan Swift  Dean of St  Patrick s  Dublin  in twenty volumes  Containing_1772.jpg',
 'path': 'embellishments_sample/1772/003557206_04_000013_1_The Works of the Reverend Dr  Jonathan Swift  Dean of St  Patrick s  Dublin  in twenty volumes  Containing_1772.jpg',
 'year': '1772'}

Loading our images

At the moment our dataset has the filename and full path for each image. However, we want to have an actual image loaded into our dataset. We can use the cast_column method to change the type of a column. Since we might want to keep our original path column, let's first make a copy of that column and store it in a new img column. We can do this by using map and returning a dictionary with the key img and the value from our path column.

dataset = dataset.map(lambda example: {"img": example['path']})
dataset = dataset.cast_column('img', features.Image())

Let's see what this looks like:

dataset
Dataset({
    features: ['fname', 'year', 'path', 'img'],
    num_rows: 10000
})

We have an image column but let's check the type of all our features:

dataset.features
{'fname': Value(dtype='string', id=None),
 'img': Image(decode=True, id=None),
 'path': Value(dtype='string', id=None),
 'year': Value(dtype='string', id=None)}

If we access an example and index into the img column we'll see our image 😃

dataset[10]['img']

Push all the things to the hub!

One of the super awesome things about the 🤗 ecosystem is the Hugging Face Hub. We can use the Hub to access models and datasets. Often this is used for sharing work with others, but it can also be a useful tool for work in progress. datasets recently added a push_to_hub method that allows you to push a dataset to the 🤗 hub with minimal fuss. This can be really helpful by allowing you to pass around a dataset with all the transforms etc. already done.

For now we'll push the dataset to the hub and keep it private initially.

Depending on where you are running the code, you may need to log in using the huggingface-cli login command.

!huggingface-cli login
        _|    _|  _|    _|    _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|_|_|_|    _|_|      _|_|_|  _|_|_|_|
        _|    _|  _|    _|  _|        _|          _|    _|_|    _|  _|            _|        _|    _|  _|        _|
        _|_|_|_|  _|    _|  _|  _|_|  _|  _|_|    _|    _|  _|  _|  _|  _|_|      _|_|_|    _|_|_|_|  _|        _|_|_|
        _|    _|  _|    _|  _|    _|  _|    _|    _|    _|    _|_|  _|    _|      _|        _|    _|  _|        _|
        _|    _|    _|_|      _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|        _|    _|    _|_|_|  _|_|_|_|

        To login, `huggingface_hub` now requires a token generated from https://huggingface.co/settings/token.
        (Deprecated, will be removed in v0.3.0) To login with username and password instead, interrupt with Ctrl+C.
        
Token: 
Login successful
Your token has been saved to /root/.huggingface/token
Authenticated through git-credential store but this isn't the helper defined on your machine.
You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default

git config --global credential.helper store
dataset.push_to_hub('davanstrien/embellishments-sample', private=True)
The repository already exists: the `private` keyword argument will be ignored.

Note: in a previous version of this blog post we had to do a few more steps to ensure images were included successfully when using push_to_hub. Thanks to this pull request we no longer need to worry about these extra steps. We just need to make sure embed_external_files=True (which is the default behaviour).
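
If you prefer to be explicit, this argument can also be passed directly; the call below should be equivalent to the one above:

# Passing embed_external_files explicitly (True is already the default),
# so the image bytes are embedded in the files pushed to the hub.
dataset.push_to_hub('davanstrien/embellishments-sample', private=True, embed_external_files=True)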

Switching machines

At this point I've created a dataset and moved it to the Hugging Face Hub. This means it is possible to pick up the work/dataset elsewhere.

In this particular example, having access to a GPU is important. Using the 🤗 hub as a way to pass around our data we could start on a laptop and pick up the work on Google Colab.

If we've moved to a new machine, we may need to log in again (one option for doing this from a notebook is shown below). Once we've done this, we can load our dataset.
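
For example, we could use notebook_login from the huggingface_hub library rather than the CLI:

# Log in from within a notebook instead of running `huggingface-cli login` in a terminal
from huggingface_hub import notebook_login

notebook_login()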

from datasets import load_dataset

dataset = load_dataset("davanstrien/embellishments-sample", use_auth_token=True)
Using custom data configuration davanstrien--embellishments-sample-01278a2d22ba110d
Downloading and preparing dataset None/None (download: 1.33 MiB, generated: 146.70 MiB, post-processed: Unknown size, total: 148.03 MiB) to /root/.cache/huggingface/datasets/parquet/davanstrien--embellishments-sample-01278a2d22ba110d/0.0.0/0b6d5799bb726b24ad7fc7be720c170d8e497f575d02d47537de9a5bac074901...
Dataset parquet downloaded and prepared to /root/.cache/huggingface/datasets/parquet/davanstrien--embellishments-sample-01278a2d22ba110d/0.0.0/0b6d5799bb726b24ad7fc7be720c170d8e497f575d02d47537de9a5bac074901. Subsequent calls will reuse this data.

Creating embeddings 🕸

We now have a dataset with a bunch of images in it. To begin creating our image search app we need to create some embeddings for these images. There are various ways in which we could do this, but one possible way is to use the CLIP models via the sentence_transformers library. The CLIP model from OpenAI learns a joint representation for both images and text, which is very useful for what we want to do, since we want to be able to input text and get back an image.

We can download the model using the SentenceTransformer class.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('clip-ViT-B-32')
ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.

This model will take as input either an image or some text and return an embedding. We can use the datasets map method to encode all our images using this model. When we call map we return a dictionary with the key embeddings containing the embeddings returned by the embedding model. We pass batched=True and batch_size=32 so the images are encoded in batches, and device='cuda' so that the encoding is done on the GPU.

ds_with_embeddings = dataset.map(
    lambda example: {'embeddings': model.encode(example['img'], device='cuda')},
    batched=True,
    batch_size=32,
)

We can "save" our work by pushing back to the 🤗 hub using push_to_hub.

ds_with_embeddings.push_to_hub('davanstrien/embellishments-sample', private=True)
Pushing split train to the Hub.
The repository already exists: the `private` keyword argument will be ignored.

If we were to move to a different machine we could grab our work again by loading it from the hub 😃

from datasets import load_dataset

ds_with_embeddings = load_dataset("davanstrien/embellishments-sample", use_auth_token=True)
Using custom data configuration davanstrien--embellishments-sample-1de6c6c3e5c828dc
Reusing dataset parquet (/root/.cache/huggingface/datasets/parquet/davanstrien--embellishments-sample-1de6c6c3e5c828dc/0.0.0/0b6d5799bb726b24ad7fc7be720c170d8e497f575d02d47537de9a5bac074901)

We now have a new column which contains the embeddings for our images. We could manually search through these and compare them to some input embedding, but datasets has an add_faiss_index method. This uses the faiss library to create an efficient index for searching embeddings. For more background on this library you can watch this YouTube video.
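
Just to illustrate what the index saves us from, a manual brute-force search might look something like this rough sketch (the query string here is an arbitrary example):

# Rough sketch of a manual (brute-force) nearest-neighbour search, for comparison with faiss.
# This compares one text query against every image embedding using cosine similarity.
import numpy as np

query = model.encode("an illustration of a ship")  # arbitrary example query
image_embeddings = np.array(ds_with_embeddings['train']['embeddings'])
similarities = util.cos_sim(query, image_embeddings)[0]   # one score per image
closest = similarities.argsort(descending=True)[:9]       # indices of the 9 most similar images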

ds_with_embeddings['train'].add_faiss_index(column='embeddings')
Dataset({
    features: ['fname', 'year', 'path', 'img', 'embeddings'],
    num_rows: 10000
})
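
One thing to keep in mind is that the faiss index lives in memory and isn't included when we push the dataset back to the hub. If we wanted to avoid rebuilding it, we could save it to disk and reload it later (the filename here is arbitrary):

# Save the index to disk so it can be reloaded later without rebuilding it
ds_with_embeddings['train'].save_faiss_index('embeddings', 'embellishments.faiss')

# ...and, in a later session, attach it again after reloading the dataset
ds_with_embeddings['train'].load_faiss_index('embeddings', 'embellishments.faiss')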

Note that these examples were generated from the full version of the dataset so you may get slightly different results.

We now have everything we need to create a simple image search. We can use the same model we used to encode our images to encode some input text. This will act as the prompt we try and find close examples for. Let's start with 'a steam engine'.

prompt = model.encode("A steam engine")

We can see what this looks like:

prompt.shape
(512,)
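
Since CLIP embeds text and images into the same space, the image embeddings we stored earlier should have the same dimensionality, which we can quickly check:

# The image embeddings live in the same 512-dimensional space as the text embedding
len(ds_with_embeddings['train'][0]['embeddings'])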

We can use another method from the datasets library, get_nearest_examples, to get the images whose embeddings are closest to our input prompt embedding. We can pass in the number of results we want to get back (k).

scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples('embeddings', prompt, k=9)

We can index into the first example this retrieves:

retrieved_examples['img'][0]

This isn't quite a steam engine but it's also not a completely weird result. We can plot the other results to see what was returned.

import matplotlib.pyplot as plt

plt.figure(figsize=(20, 20))
columns = 3
for i in range(9):
    image = retrieved_examples['img'][i]
    # subplot expects integer row/column counts, so use integer division
    plt.subplot(9 // columns + 1, columns, i + 1)
    plt.imshow(image)

Some of these results look fairly close to our input prompt. We can wrap this in a function so we can more easily play around with different prompts:

def get_image_from_text(text_prompt, number_to_retrieve=9):
    prompt = model.encode(text_prompt)
    scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples(
        'embeddings', prompt, k=number_to_retrieve)
    plt.figure(figsize=(20, 20))
    plt.suptitle(text_prompt)
    columns = 3
    for i in range(number_to_retrieve):
        image = retrieved_examples['img'][i]
        # subplot expects integer row/column counts
        plt.subplot(number_to_retrieve // columns + 1, columns, i + 1)
        plt.imshow(image)
get_image_from_text("An illustration of the sun behind a mountain")

Trying a bunch of prompts ✨

Now that we have a function for getting a few results, we can try a bunch of different prompts:

  • For some of these I'll choose prompts which are a broad 'category', e.g. 'a musical instrument' or 'an animal', while others are specific, e.g. 'a guitar'.

  • Out of interest I also tried a boolean operator: "An illustration of a cat or a dog".

  • Finally, I tried something a little more abstract: "an empty abyss".

prompts = ["A musical instrument", "A guitar", "An animal", "An illustration of a cat or a dog", "an empty abyss"]
for prompt in prompts:
    get_image_from_text(prompt)

We can see these results aren't always right, but there are usually some reasonable results in there. It already seems like this could be useful for searching for the semantic content of an image in this dataset. However, we might hold off on sharing this as is...

Creating a Hugging Face Space? 🤷🏼

One obvious next step for this kind of project is to create a Hugging Face Spaces demo. This is what I've done for other models.

It was a fairly simple process to get a Gradio app set up from the point we got to here. Here is a screenshot of this app.
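
For illustration, a minimal version of such an app might look something like the sketch below. This is only a sketch (the actual Space's code may differ), and it assumes a recent version of Gradio with the Gallery output component:

# A minimal, illustrative Gradio app wrapping the search we built above.
import gradio as gr

def search(text):
    query = model.encode(text)
    _, retrieved = ds_with_embeddings['train'].get_nearest_examples('embeddings', query, k=9)
    return retrieved['img']  # a list of PIL images for the gallery

demo = gr.Interface(fn=search, inputs="text", outputs=gr.Gallery())
demo.launch()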

However, I'm a little bit wary about making this public straight away. If we look at the model card for the CLIP model, we can see the primary intended uses:

Primary intended uses

We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. (source)

This is fairly close to what we are interested in here. In particular, we might be interested in how well the model deals with the kinds of images in our dataset (illustrations from mostly 19th century books). The images in our dataset are (probably) fairly different from the training data. The fact that some of the images also contain text might help CLIP, since it displays some OCR ability.

However, looking at the out-of-scope use cases in the model card:

Out-of-Scope Use Cases

Any deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. (source)

This suggests that 'deployment' is not a good idea. Whilst the results I got are interesting, I haven't played around with the model enough yet (and haven't done anything more systematic to evaluate its performance and biases). Another consideration is the target dataset itself. The images are drawn from books covering a variety of subjects and time periods. There are plenty of books which represent colonial attitudes, and as a result some of the images included may represent certain groups of people in a negative way. This could potentially be a bad combination with a tool which allows any arbitrary text input to be encoded as a prompt.

There may be ways around this issue but this will require a bit more thought.

Conclusion

Although we don't have a nice demo to show for it, we've seen how we can use datasets to:

  • load images into the new Image feature type
  • 'save' our work using push_to_hub and use this to move data between machines/sessions
  • create a faiss index for images that we can use to retrieve images from a text (or image) input